BG/NBD Model - Stan Implementation

Author

Abdullah Mahmood

Published

March 21, 2025

In this notebook we show how to fit a BG/NBD model in Stan. We compare the results with the lifetimes package. The model is presented in the paper: Fader, P. S., Hardie, B. G., & Lee, K. L. (2005). “Counting your customers” the easy way: An alternative to the Pareto/NBD model. Marketing science, 24(2), 275-284.

Introduction

Faced with a database containing information on the frequency and timing of transactions for a list of customers, it is natural to try to make forecasts about future purchasing. These projections often range from aggregate sales trajectories (e.g., for the next 52 weeks), to individual-level conditional expectations (i.e., the best guess about a particular customer’s future purchasing, given information about his past behavior). The Pareto/NBD is a robust model used for this purpose. It was introduced in the “Counting Your Customers” framework originally proposed by Schmittlein, Morrison, and Colombo (1987), hereafter SMC. This model describes repeat-buying behavior in settings where customer “dropout” is unobserved: it assumes that customers buy at a steady rate (albeit in a stochastic manner) for a period of time, and then become inactive. More specifically, time to “dropout” is modeled using the Pareto (exponential-gamma mixture) timing model, and repeat-buying behavior while active is modeled using the NBD (Poisson-gamma mixture) counting model.

In this notebook, we implement the beta-geometric/negative binomial distribution (BG/NBD), which represents a slight variation in the behavioral “story” that lies at the heart of SMC’s original work, but it is vastly easier to implement. The model parameters can be obtained quite easily using Python and Stan, with no appreciable loss in the model’s ability to fit or predict customer purchasing patterns. We will introduce the BG/NBD model from first principles, and present the expressions required for making individual-level statements about future buying behavior.

Before developing the BG/NBD model, we briefly review the Pareto/NBD model. Then we outline the assumptions of the BG/NBD model, deriving the key expressions at the individual-level, and for a randomly-chosen individual.

Pareto/NBD

The Pareto/NBD model is based on five assumptions:

  1. While active, the number of transactions made by a customer in a time period of length \(t\) is distributed Poisson with transaction rate \(\lambda\) (i.e., with mean \(\lambda t\)).

    \[ x \sim \text{Poisson}(\lambda) \]

  2. Heterogeneity in transaction rates across customers follows a gamma distribution with shape parameter \(r\) and scale parameter \(\alpha\).

    \[ \lambda \sim \text{Gamma}(r, \alpha) \]

  3. Each customer has an unobserved “lifetime” of length \(\tau\). The point at which the customer becomes inactive is distributed exponential with dropout rate \(\mu\).

    \[ \tau \sim \text{Exponential}(\mu) \]

  4. Heterogeneity in dropout rates across customers follows a gamma distribution with shape parameter \(s\) and scale parameter \(\beta\).

    \[ \mu \sim \text{Gamma}(s, \beta) \]

  5. The transaction rate \(\lambda\) and the dropout rate \(\mu\) vary independently across customers.

The Pareto/NBD (and, as we will see shortly, the BG/NBD) requires only two pieces of information about each customer’s past purchasing history: his “recency” (when his last transaction occurred) and “frequency” (how many transactions he made in a specified time period). The notation used to represent this information is \(\left(X=x, t_{x}, T\right)\), where \(x\) is the number of transactions observed in the time period \((0, T]\) and \(t_{x}\left(0<t_{x} \leq T\right)\) is the time of the last transaction. Using these two key summary statistics, SMC derive expressions for a number of managerially relevant quantities, such as:

  • \(E[X(t)]\), the expected number of transactions in a time period of length \(t\), which is central to computing the expected transaction volume for the whole customer base over time.
  • \(P(X(t)=x)\), the probability of observing \(x\) transactions in a time period of length \(t\).
  • \(E\left(Y(t) \mid X=x, t_{x}, T\right)\), the expected number of transactions in the period \((T, T+t]\) for an individual with observed behavior ( \(X=x, t_{x}, T\) ).

The likelihood function associated with the Pareto/NBD model is quite complex, involving numerous evaluations of the Gaussian hypergeometric function. Multiple evaluations of the Gaussian hypergeometric are very demanding from a computational standpoint. Furthermore, the precision of some numerical procedures used to evaluate this function can vary substantially over the parameter space; this can cause major problems for numerical optimization routines as they search for the maximum of the likelihood function. In contrast, the BG/NBD model can be implemented very quickly and efficiently via MLE and MCMC.

BG/NBD Assumptions

Most aspects of the BG/NBD model directly mirror those of the Pareto/NBD. The only difference lies in the story being told about how/when customers become inactive. The Pareto timing model assumes that dropout can occur at any point in time, independent of the occurrence of actual purchases. If we assume instead that dropout occurs immediately after a purchase, we can model this process using the beta-geometric (BG) model.

More formally, the BG/NBD model is based on the following five assumptions (the first two of which are identical to the corresponding Pareto/NBD assumptions):

  1. While active, the number of transactions made by a customer follows a Poisson process with transaction rate \(\lambda\). This is equivalent to assuming that the time between transactions is distributed exponential with transaction rate \(\lambda\), i.e.,

    \[ f\left(t_{j} \mid t_{j-1} ; \lambda\right)=\lambda e^{-\lambda\left(t_{j}-t_{j-1}\right)}, \quad t_{j}>t_{j-1} \geq 0 \]

    \[ \text{"Wait"} \sim \text{Exponential}(\lambda) \]

  2. Heterogeneity in \(\lambda\) follows a gamma distribution with pdf

    \[ \begin{equation*} f(\lambda \mid r, \alpha)=\frac{\alpha^{r} \lambda^{r-1} e^{-\lambda \alpha}}{\Gamma(r)}, \quad \lambda>0 \end{equation*} \]

    \[ \lambda \sim \text{Gamma}(r,\alpha) \]

  3. After any transaction, a customer becomes inactive with probability \(p\). Therefore the point at which the customer “drops out” is distributed across transactions according to a (shifted) geometric distribution with pmf

    \[ P(\text { inactive immediately after } j \text { th transaction })=p(1-p)^{j-1}, \quad j=1,2,3, \ldots \]

    \[ \text{"Drop Out"} \sim \text{Bernoulli}(p) \]

  4. Heterogeneity in \(p\) follows a beta distribution with pdf

    \[ \begin{equation*} f(p \mid a, b)=\frac{p^{a-1}(1-p)^{b-1}}{B(a, b)}, \quad 0 \leq p \leq 1 \end{equation*} \]

    where \(B(a, b)\) is the beta function, which can be expressed in terms of gamma functions: \(B(a, b)=\Gamma(a) \Gamma(b) / \Gamma(a+b)\).

    \[ p \sim \text{Beta}(a, b) \]

  5. The transaction rate \(\lambda\) and the dropout probability \(p\) vary independently across customers.

Model Development at the Individual Level

Derivation of the Likelihood Function

Consider a customer who had \(x\) transactions in the period \((0, T]\) with the transactions occurring at \(t_{1}, t_{2}, \ldots, t_{x}\) :

We derive the individual-level likelihood function in the following manner:

  • the likelihood of the first transaction occurring at \(t_{1}\) is a standard exponential likelihood component, which equals \(\lambda e^{-\lambda t_{1}}\).
  • the likelihood of the second transaction occurring at \(t_{2}\) is the probability of remaining active at \(t_{1}\) times the standard exponential likelihood component, which equals \((1-p) \lambda e^{-\lambda\left(t_{2}-t_{1}\right)}\).
  • the likelihood of the \(x\)th transaction occurring at \(t_{x}\) is the probability of remaining active at \(t_{x-1}\) times the standard exponential likelihood component, which equals \((1-p) \lambda e^{-\lambda\left(t_{x}-t_{x-1}\right)}\).
  • Finally, the likelihood of observing zero purchases in \(\left(t_{x}, T\right]\) is the probability the customer became inactive at \(t_{x}\), plus the probability he remained active but made no purchases in this interval, which equals \(p+(1-p) e^{-\lambda\left(T-t_{x}\right)}\).

Therefore,

\[ \begin{aligned} L\left(\lambda, p \mid t_{1}, t_{2}, \ldots, t_{x}, T\right) & =\lambda e^{-\lambda t_{1}}(1-p) \lambda e^{-\lambda\left(t_{2}-t_{1}\right)} \cdots(1-p) \lambda e^{-\lambda\left(t_{x}-t_{x-1}\right)}\left\{p+(1-p) e^{-\lambda\left(T-t_{x}\right)}\right\} \\ & =p(1-p)^{x-1} \lambda^{x} e^{-\lambda t_{x}}+(1-p)^{x} \lambda^{x} e^{-\lambda T} \end{aligned} \]

As pointed out earlier for the Pareto/NBD, note that information on the timing of the \(x\) transactions is not required; a sufficient summary of the customer’s purchase history is \(\left(X=x, t_{x}, T\right)\).

Similar to SMC, we assume that all customers are active at the beginning of the observation period; therefore the likelihood function for a customer making 0 purchases in the interval \((0, T]\) is the standard exponential survival function:

\[ L(\lambda \mid X=0, T)=e^{-\lambda T} \]

Thus we can write the individual-level likelihood function as

\[ \begin{equation*} L(\lambda, p \mid X=x, T)=(1-p)^{x} \lambda^{x} e^{-\lambda T}+\delta_{x>0} p(1-p)^{x-1} \lambda^{x} e^{-\lambda t_{x}} \end{equation*} \]

where \(\delta_{x>0}=1\) if \(x>0,0\) otherwise.
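As a quick sanity check, this individual-level likelihood can be coded directly (a minimal sketch; the parameter values below are illustrative, not estimates):

```python
import numpy as np

def indiv_likelihood(lam, p, x, t_x, T):
    # First term: customer is still active at T and made x purchases in (0, T]
    active = (1 - p) ** x * lam ** x * np.exp(-lam * T)
    # Second term (only if x > 0): customer dropped out right after the purchase at t_x
    dropout = p * (1 - p) ** (x - 1) * lam ** x * np.exp(-lam * t_x) if x > 0 else 0.0
    return active + dropout

# With p = 0 the customer never drops out and the expression reduces to the
# pure exponential survival function: L(lambda | X=0, T) = exp(-lambda * T)
indiv_likelihood(lam=0.2, p=0.0, x=0, t_x=0.0, T=10.0)
```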

Derivation of \(P(X(t)=x)\)

Let the random variable \(X(t)\) denote the number of transactions occurring in a time period of length \(t\) (with a time origin of 0). To derive an expression for \(P(X(t)=x)\), we recall the fundamental relationship between inter-event times and the number of events: \(X(t) \geq x \Leftrightarrow T_{x} \leq t\), where \(T_{x}\) is the random variable denoting the time of the \(x\)th transaction. Given our assumption regarding the nature of the dropout process,

\[ \begin{align*} P(X(t)=x)=& P \text{(active after } x \text{th purchase}) \\ &\cdot P\left(T_{x} \leq t \text { and } T_{x+1}>t\right) +\delta_{x>0} \\ &\cdot P(\text { becomes inactive after } x \text { th purchase }) \\ &\cdot P\left(T_{x} \leq t\right) \end{align*} \]

Given the assumption that the time between transactions is characterized by the exponential distribution, \(P\left(T_{x} \leq t\right.\) and \(\left.T_{x+1}>t\right)\) is simply the Poisson probability that \(X(t)=x\), and \(P\left(T_{x} \leq t\right)\) is the Erlang-\(x\) cdf. Therefore

\[ \begin{equation*} P(X(t)=x \mid \lambda, p)=(1-p)^{x} \frac{(\lambda t)^{x} e^{-\lambda t}}{x!}+\delta_{x>0} p(1-p)^{x-1}\cdot\left[1-e^{-\lambda t} \sum_{j=0}^{x-1} \frac{(\lambda t)^{j}}{j!}\right] \end{equation*} \]
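This pmf is easy to code, since the Erlang-\(x\) cdf can be computed from the Poisson count: \(P(T_x \leq t) = P(N(t) \geq x)\). A minimal sketch with illustrative parameter values:

```python
import numpy as np
from scipy.stats import poisson

def prob_x_given_lam_p(x, t, lam, p):
    # Poisson probability of exactly x purchases while remaining active
    active = (1 - p) ** x * poisson.pmf(x, lam * t)
    if x == 0:
        return active
    # Erlang-x cdf P(T_x <= t) = P(N(t) >= x), via the Poisson cdf
    erlang_cdf = 1.0 - poisson.cdf(x - 1, lam * t)
    return active + p * (1 - p) ** (x - 1) * erlang_cdf

# Sanity check: the probabilities over x sum to one
total = sum(prob_x_given_lam_p(x, t=10.0, lam=0.3, p=0.2) for x in range(200))
```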

Derivation of \(E[X(t)]\)

Given that the number of transactions follows a Poisson process, \(E[X(t)]\) is simply \(\lambda t\) if the customer is active at \(t\). For a customer who becomes inactive at \(\tau \leq t\), the expected number of transactions in the period \((0, \tau]\) is \(\lambda \tau\).

But what is the likelihood that a customer becomes inactive at \(\tau\) ? Conditional on \(\lambda\) and \(p\),

\[ \begin{aligned} P(\tau>t)=P(\text { active at } t \mid \lambda, p) & =\sum_{j=0}^{\infty}(1-p)^{j} \frac{(\lambda t)^{j} e^{-\lambda t}}{j!} \\ & =e^{-\lambda p t} \end{aligned} \]

This implies that the pdf of the dropout time is given by \(g(\tau \mid \lambda, p)=\lambda p e^{-\lambda p \tau}\). (Note that this takes on an exponential form. But it features an explicit association with the transaction rate \(\lambda\), in contrast with the Pareto/NBD, which has an exponential dropout process that is independent of the transaction rate.) It follows that the expected number of transactions in a time period of length \(t\) is given by

\[ \begin{align*} E(X(t) \mid \lambda, p) & =\lambda t \cdot P(\tau>t)+\int_{0}^{t} \lambda \tau g(\tau \mid \lambda, p) d \tau \\ & =\frac{1}{p}-\frac{1}{p} e^{-\lambda p t} \end{align*} \]
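The closed form can be verified by simulation: draw exponential waits, flip the dropout coin after each purchase, and count purchases before \(t\). This is a quick sketch (not part of the original paper), with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_customer(lam, p, t, rng):
    # Exponential inter-purchase times; Bernoulli(p) dropout after each purchase
    n, clock = 0, 0.0
    while True:
        clock += rng.exponential(1.0 / lam)
        if clock > t:
            return n
        n += 1
        if rng.random() < p:  # becomes inactive immediately after this purchase
            return n

lam, p, t = 0.5, 0.2, 20.0
sims = [simulate_customer(lam, p, t, rng) for _ in range(100_000)]
closed_form = (1 - np.exp(-lam * p * t)) / p  # = 1/p - (1/p) e^{-lambda p t}
```

The simulated mean agrees with the closed form to within Monte Carlo error.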

Model Development for a Randomly-Chosen Individual

All the expressions developed above are conditional on the transaction rate \(\lambda\) and the dropout probability \(p\), both of which are unobserved. To derive the equivalent expressions for a randomly chosen customer, we take the expectation of the individual-level result over the mixing distributions for \(\lambda\) and \(p\), as given in (1) and (2). This yields the following results.

  • Taking the expectation of (3) over the distribution of \(\lambda\) and \(p\) results in the following expression for the likelihood function for a randomly-chosen customer with purchase history \(\left(X=x, t_{x}, T\right):\)

\[ \begin{align*} L\left(r, \alpha, a, b \mid X=x, t_{x}, T\right)&=\frac{B(a, b+x)}{B(a, b)} \frac{\Gamma(r+x) \alpha^{r}}{\Gamma(r)(\alpha+T)^{r+x}} \\ &+\delta_{x>0} \frac{B(a+1, b+x-1)}{B(a, b)} \frac{\Gamma(r+x) \alpha^{r}}{\Gamma(r)\left(\alpha+t_{x}\right)^{r+x}} \end{align*} \]

The four BG/NBD model parameters \((r, \alpha, a, b)\) can be estimated via the method of maximum likelihood in the following manner. Suppose we have a sample of \(N\) customers, where customer \(i\) had \(X_{i}=x_{i}\) transactions in the period \((0, T_{i}]\), with the last transaction occurring at \(t_{x_{i}}\). The sample log-likelihood function is given by

\[ \begin{equation*} L L(r, \alpha, a, b)=\sum_{i=1}^{N} \ln \left[L\left(r, \alpha, a, b \mid X_{i}=x_{i}, t_{x_{i}}, T_{i}\right)\right] \end{equation*} \]

This can be maximized using standard numerical optimization routines.

  • Taking the expectation of (4) over the distribution of \(\lambda\) and \(p\) results in the following expression for the probability of observing \(x\) purchases in a time period of length \(t\) :

\[ \begin{align*} P(X(t)= x \mid r, \alpha, a, b) &=\frac{B(a, b+x)}{B(a, b)} \frac{\Gamma(r+x)}{\Gamma(r) x!}\left(\frac{\alpha}{\alpha+t}\right)^{r}\left(\frac{t}{\alpha+t}\right)^{x} \\ &+\delta_{x>0} \frac{B(a+1, b+x-1)}{B(a, b)} \\ &\cdot\left[1-\left(\frac{\alpha}{\alpha+t}\right)^{r}\left\{\sum_{j=0}^{x-1} \frac{\Gamma(r+j)}{\Gamma(r) j!}\left(\frac{t}{\alpha+t}\right)^{j}\right\}\right] \end{align*} \]

  • Finally, taking the expectation of (5) over the distribution of \(\lambda\) and \(p\) results in the following expression for the expected number of purchases in a time period of length \(t\) :

\[ \begin{equation*} E(X(t) \mid r, \alpha, a, b)=\frac{a+b-1}{a-1}\left[1-\left(\frac{\alpha}{\alpha+t}\right)^{r}{ }_{2} F_{1}\left(r, b ; a+b-1 ; \frac{t}{\alpha+t}\right)\right] \end{equation*} \]

where \({ }_{2} F_{1}(\cdot)\) is the Gaussian hypergeometric function. (See the Appendix for details of the derivation.)

Note that this final expression requires a single evaluation of the Gaussian hypergeometric function. But it is important to emphasize that this expectation is only used after the likelihood function has been maximized. A single evaluation of the Gaussian hypergeometric function for a given set of parameters is relatively straightforward, and can be closely approximated with a polynomial series, even in a modeling environment such as Microsoft Excel.
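With `scipy.special.hyp2f1`, this expectation is a one-liner. A minimal sketch; the parameter values below are the MLEs reported later in this notebook:

```python
from scipy.special import hyp2f1

def expected_transactions(t, r, alpha, a, b):
    # Single evaluation of the Gaussian hypergeometric function 2F1
    h = hyp2f1(r, b, a + b - 1, t / (alpha + t))
    return (a + b - 1) / (a - 1) * (1 - (alpha / (alpha + t)) ** r * h)

# Expected repeat transactions over the 39-week calibration period
expected_transactions(39.0, 0.2426, 4.4136, 0.7929, 2.4259)
```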

In order for the BG/NBD model to be of use in a forward-looking customer-base analysis, we need to obtain an expression for the expected number of transactions in a future period of length \(t\) for an individual with past observed behavior \(\left(X=x, t_{x}, T\right)\). We provide a careful derivation in the Appendix, but here is the key expression:

\[ \begin{align*} & E\left(Y(t) \mid X=x, t_{x}, T, r, \alpha, a, b\right)= \\ & \qquad \frac{\frac{a+b+x-1}{a-1}\left[1-\left(\frac{\alpha+T}{\alpha+T+t}\right)^{r+x}{ }_{2} F_{1}\left(r+x, b+x ; a+b+x-1 ; \frac{t}{\alpha+T+t}\right)\right]}{1+\delta_{x>0} \frac{a}{b+x-1}\left(\frac{\alpha+T}{\alpha+t_{x}}\right)^{r+x}} \end{align*} \]

Once again, this expectation requires a single evaluation of the Gaussian hypergeometric function for any customer of interest, but this is not a burdensome task. The remainder of the expression is simple arithmetic.
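The conditional expectation translates directly into code (a sketch; the customer history `x`, `t_x`, `T` in the example call is hypothetical, while the parameter values are the MLEs reported later in this notebook):

```python
from scipy.special import hyp2f1

def cond_expected_transactions(t, x, t_x, T, r, alpha, a, b):
    # Expected purchases in (T, T + t] given the observed history (x, t_x, T)
    h = hyp2f1(r + x, b + x, a + b + x - 1, t / (alpha + T + t))
    numer = (a + b + x - 1) / (a - 1) * (
        1 - ((alpha + T) / (alpha + T + t)) ** (r + x) * h
    )
    denom = 1.0
    if x > 0:  # the delta_{x>0} term in the denominator
        denom += a / (b + x - 1) * ((alpha + T) / (alpha + t_x)) ** (r + x)
    return numer / denom

cond_expected_transactions(t=39.0, x=2, t_x=30.0, T=39.0,
                           r=0.2426, alpha=4.4136, a=0.7929, b=2.4259)
```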

Maximum likelihood estimates of the model parameters (\(r\), \(\alpha\), \(a\), \(b\)) are obtained by maximizing the log-likelihood function given in (7) above. Standard numerical optimization methods are employed, using SciPy’s minimize function and Stan’s optimize method, to obtain the parameter estimates. To implement the model in SciPy, we rewrite the likelihood function, (6), as

\[ L(r, \alpha, a, b | X = x, t_x, T) = A_1 \cdot A_2 \cdot (A_3 + \delta_{x > 0} A_4) \]

where

\[ A_1 = \frac{\Gamma(r + x)\alpha^r}{\Gamma(r)} \quad A_2 = \frac{\Gamma(a + b)\Gamma(b + x)}{\Gamma(b)\Gamma(a + b + x)} \]

\[ A_3 = \left( \frac{1}{\alpha + T} \right)^{r + x} \quad A_4 = \left( \frac{a}{b + x - 1} \right) \left( \frac{1}{\alpha + t_x} \right)^{r + x} \]

Computing \(P(\text{alive})\) Using the BG/NBD Model

One of the key results we associate with the Pareto/NBD model (Schmittlein et al. 1987) is an expression for \(P(\text{alive}\mid x, t_x, T)\), the probability that a customer with purchase history \((x, t_x, T)\) is “alive” at time \(T\).

Deriving the expression for \(P(\text{alive})\) under the BG/NBD model is a trivial exercise. Given the assumptions of the BG/NBD model, the probability that a customer with purchase history \((x, t_x, T)\) is alive at time \(T\) is simply the probability that he did not “die” at \(t_x\).

Recall the BG/NBD likelihood function:

\[ \begin{align*} L(r,\alpha,a,b \mid x,t_x,T) &= \frac{B(a,b+x)}{B(a,b)} \frac{\Gamma(r + x)\alpha^{r}}{\Gamma(r)(\alpha + T)^{r+x}} \\ &+ \delta_{x>0} \frac{B(a+1,b+x-1)}{B(a,b)} \frac{\Gamma(r + x)\alpha^{r}}{\Gamma(r)(\alpha + t_x)^{r+x}}. \end{align*} \]

Noting that the first half of the expression represents the likelihood under the assumption that the customer did not die at \(t_x\) (and is therefore alive at \(T\)), while the second half represents the likelihood under the assumption that the customer died at \(t_x\), the application of Bayes’ theorem gives us

\[ \begin{align*} P(\text{alive} &\mid x,t_x,T,r,\alpha,a,b) \\ &= \frac{\frac{B(a,b+x)}{B(a,b)} \frac{\Gamma(r+x)\alpha^{r}}{\Gamma(r)(\alpha+T)^{r+x}}}{L(r,\alpha,a,b \mid x,t_x,T)} \\ &= \frac{1}{1 + \delta_{x>0} \frac{a}{b+x-1} \left( \frac{\alpha + T}{\alpha + t_x} \right)^{r+x}}. \end{align*} \]

We note that \(P(\text{alive}) = 1\) for a customer who made no purchases in the interval \((0,T]\); this follows from the model’s assumptions that death occurs after a purchase and that customers are alive at the beginning of the observation period. Those interested in the \(P(\text{alive})\) metric may view this as a shortcoming of the model.
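The expression above is simple arithmetic (a sketch; the example histories are hypothetical, while the parameter values are the MLEs reported later in this notebook):

```python
def p_alive(x, t_x, T, r, alpha, a, b):
    if x == 0:
        return 1.0  # zero-repeat customers are assumed alive (see the note above)
    return 1.0 / (1.0 + a / (b + x - 1) * ((alpha + T) / (alpha + t_x)) ** (r + x))

# Same frequency, different recency: the more recent buyer is more likely alive
p_alive(4, 35.0, 39.0, 0.2426, 4.4136, 0.7929, 2.4259)
p_alive(4, 10.0, 39.0, 0.2426, 4.4136, 0.7929, 2.4259)
```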

There are two ways to overcome this potential problem.

  1. Zero-Inflated Variant:
    Assume \(\pi \times 100\%\) of the customer base is “dead” at \(T = 0\). The likelihood function becomes:

    \[ \begin{aligned} L(r,\alpha,a,b,\pi \mid x,t_x,T) &= \pi \delta_{x=0} + (1-\pi) \bigg\{ \frac{B(a,b+x)}{B(a,b)} \frac{\Gamma(r+x)\alpha^{r}}{\Gamma(r)(\alpha+T)^{r+x}} \\ & \quad + \delta_{x>0} \frac{B(a+1,b+x-1)}{B(a,b)} \frac{\Gamma(r+x)\alpha^{r}}{\Gamma(r)(\alpha+t_x)^{r+x}} \bigg\}. \end{aligned} \]

    Then:

    \[ \begin{align*} P(\text{alive} \mid x,t_x,T,r,\alpha,a,b,\pi) \\ = \begin{cases} \frac{(1-\pi)}{\pi \left( \frac{\alpha+T}{\alpha} \right)^r + (1-\pi)} & x = 0 \\ \frac{1}{1 + \frac{a}{b+x-1} \left( \frac{\alpha+T}{\alpha+t_x} \right)^{r+x}} & x > 0 \end{cases} \end{align*} \]

  2. Flip-at-Time-0 Variant:
    Modify the death process assumption:

    \[ \begin{align*} L(r,\alpha,a,b \mid x,t_x,T) &= \frac{B(a,b+x+1)}{B(a,b)} \frac{\Gamma(r+x)\alpha^{r}}{\Gamma(r)(\alpha+T)^{r+x}} \\ &+ \frac{B(a+1,b+x)}{B(a,b)} \frac{\Gamma(r+x)\alpha^{r}}{\Gamma(r)(\alpha+t_x)^{r+x}}. \end{align*} \]

    Then:

    \[ P(\text{alive} \mid x,t_x,T,r,\alpha,a,b) = \frac{1}{1 + \frac{a}{b+x} \left( \frac{\alpha+T}{\alpha+t_x} \right)^{r+x}}, \] which is less than \(1\) even for a customer with \(x = 0\) purchases in the interval \((0,T]\).
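Both variants of \(P(\text{alive})\) are straightforward to code (a sketch of the two formulas above; the \(x=0\) case uses \(t_x = 0\)):

```python
def p_alive_zero_inflated(x, t_x, T, r, alpha, a, b, pi):
    # Variant 1: a fraction pi of customers is already "dead" at time 0
    if x == 0:
        return (1 - pi) / (pi * ((alpha + T) / alpha) ** r + (1 - pi))
    return 1.0 / (1.0 + a / (b + x - 1) * ((alpha + T) / (alpha + t_x)) ** (r + x))

def p_alive_flip_at_zero(x, t_x, T, r, alpha, a, b):
    # Variant 2: the dropout coin is also flipped at time 0 (t_x = 0 when x = 0)
    return 1.0 / (1.0 + a / (b + x) * ((alpha + T) / (alpha + t_x)) ** (r + x))
```

With \(\pi = 0\), variant 1 collapses back to the original model's \(P(\text{alive}) = 1\) for \(x = 0\), while variant 2 gives a value strictly below 1.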

Imports

Packages

Code
from utils import CDNOW, Stan, StanQuap
from scipy.optimize import minimize
import arviz as az
import polars as pl
import numpy as np


import matplotlib.pyplot as plt
%config InlineBackend.figure_formats = ['svg']

Data

data = CDNOW(master=False, calib_p=273) # 39 week calibration period
rfm_data = data.rfm_summary().select("P1X", "t_x", "T").collect().to_numpy()
p1x, t_x, T = rfm_data[:, 0], rfm_data[:, 1], rfm_data[:, 2]

Recall from the paper the following definitions:

  • p1x represents the number of repeat purchases the customer has made, i.e., one less than the total number of purchases. More precisely, it is the count of time periods in which the customer made a purchase: if the time unit is days, it is the number of distinct days on which the customer made a purchase.
  • T represents the age of the customer in whatever time units chosen (weekly, in the above dataset). This is equal to the duration between a customer’s first purchase and the end of the period under study.
  • t_x represents the age of the customer when they made their most recent purchase. This is equal to the duration between a customer’s first purchase and their latest purchase. (Thus if they have made only 1 purchase, the recency is 0.)
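The three summary statistics can be computed from a raw transaction log as follows (a toy sketch with made-up purchase times, not the CDNOW pipeline used above):

```python
import numpy as np

# Toy transaction log for one customer (weeks since their first purchase;
# week 0 is the first purchase itself). Made-up values, for illustration only.
T_cal = 39.0
purchase_weeks = np.array([0.0, 4.5, 11.0, 25.0])

x_toy = len(purchase_weeks) - 1                     # repeat purchases: excludes the first
tx_toy = purchase_weeks[-1] if x_toy > 0 else 0.0   # recency: time of last repeat purchase
T_toy = T_cal - purchase_weeks[0]                   # age: first purchase to end of calibration
print(x_toy, tx_toy, T_toy)  # → 3 25.0 39.0
```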

Model Specification

The BG/NBD model is a probabilistic model that describes the buying behavior of a customer in the non-contractual setting. It is based on the following assumptions for each customer:

Frequency Process

  1. While active, the time between transactions is distributed exponential with transaction rate \(\lambda\), i.e.

\[ f(t_{j} \mid t_{j-1}; \lambda) = \lambda \exp(-\lambda (t_{j} - t_{j - 1})), \quad t_{j} > t_{j - 1} \geq 0 \]

  2. Heterogeneity in \(\lambda\) follows a gamma distribution with pdf

\[ f(\lambda \mid r, \alpha) = \frac{\alpha^{r}\lambda^{r - 1}\exp(-\lambda \alpha)}{\Gamma(r)}, \quad \lambda > 0 \]

Dropout Process

  1. After any transaction, a customer becomes inactive with probability \(p\).

  2. Heterogeneity in \(p\) follows a beta distribution with pdf

\[ f(p \mid a, b) = \frac{\Gamma(a + b)}{\Gamma(a) \Gamma(b)} p^{a - 1}(1 - p)^{b - 1}, \quad 0 \leq p \leq 1 \]

  3. The transaction rate \(\lambda\) and the dropout probability \(p\) vary independently across customers.

Instead of estimating \(\lambda\) and \(p\) for each specific customer, we do so for a randomly chosen customer, i.e. we work with the expected values of the parameters. Hence, we are interested in finding the posterior distribution of the parameters \(r\), \(\alpha\), \(a\), and \(b\).

Analytical MLE

Standard SciPy Implementation

Code
import numpy as np
from scipy.special import gammaln, hyp2f1, gamma, factorial

def bgnbd_ll(x, p1x, t_x, T):
    r, alpha, a, b = x

    # Logarithm calculations with numerical stability
    log_alpha = np.log(np.clip(alpha, 1e-10, None))  # Avoid log(0) by clipping to a small value
    log_alpha_t_x = np.log(np.clip(alpha + t_x, 1e-10, None))

    # Components of the log-likelihood
    D_1 = (
        gammaln(r + p1x)
        - gammaln(r)
        + gammaln(a + b)
        + gammaln(b + p1x)
        - gammaln(b)
        - gammaln(a + b + p1x)
    )
    D_2 = r * log_alpha - (r + p1x) * log_alpha_t_x
    C_3 = ((alpha + t_x) / (alpha + T)) ** (r + p1x)
    C_4 = a / (b + p1x - 1)

    # Handle cases where p1x > 0 and apply log to valid values
    log_term = np.log(np.clip(C_3 + C_4, 1e-10, None))
    result = D_1 + D_2 + np.where(p1x > 0, log_term, np.log(np.clip(C_3, 1e-10, None)))

    return -np.sum(result)

def bgnbd_est():
    guess={'r': 0.01, 'alpha': 0.01, 'a': 0.01, 'b': 0.01}
    # Note: BFGS is an unconstrained method; to enforce positivity,
    # switch to method="L-BFGS-B" and pass bounds=[(1e-6, None)] * 4

    # Optimization using minimize
    return minimize(
        bgnbd_ll,
        x0=list(guess.values()),
        method="BFGS",
        args=(p1x, t_x, T)
    )

result = bgnbd_est()
r, alpha, a, b = result.x
ll = result.fun

print(
f"""r = {r:0.4f}
α = {alpha:0.4f}
a = {a:0.4f}
b = {b:0.4f}
Log-Likelihood = {-ll:0.4f}"""
)

index = ["r", "α", "a", "b"]
scipy_params = result.x
var_mat = result.hess_inv
se = np.sqrt(np.diag(var_mat))
plo = scipy_params - 1.96 * se
phi = scipy_params + 1.96 * se
pl.DataFrame(
    {
        "Parameter": index,
        "Coef": scipy_params,
        "SE (Coef)": se,
        "5.5%": plo,
        "94.5%": phi,
    }
)
r = 0.2426
α = 4.4136
a = 0.7929
b = 2.4259
Log-Likelihood = -9582.4292
shape: (4, 5)
Parameter Coef SE (Coef) 2.5% 97.5%
str f64 f64 f64 f64
"r" 0.242595 0.012417 0.218258 0.266931
"α" 4.413603 0.476973 3.478735 5.348471
"a" 0.792923 0.180615 0.438918 1.146928
"b" 2.425909 0.777435 0.902136 3.949683

Cameron Davidson-Pilon’s lifetimes Implementation

Source: BetaGeoFitter, fit function

Code
import autograd.numpy as np
from autograd.scipy.special import gammaln, beta, gamma
from autograd import value_and_grad, hessian

def negative_log_likelihood(log_params, x, t_x, T):
    params = np.exp(log_params)
    r, alpha, a, b = params

    A_1 = gammaln(r + x) - gammaln(r) + r * np.log(alpha)
    A_2 = gammaln(a + b) + gammaln(b + x) - gammaln(b) - gammaln(a + b + x)
    A_3 = -(r + x) * np.log(alpha + T)
    A_4 = np.log(a) - np.log(b + np.maximum(x, 1) - 1) - (r + x) * np.log(t_x + alpha)

    max_A_3_A_4 = np.maximum(A_3, A_4)

    ll = (A_1 + A_2 + np.log(np.exp(A_3 - max_A_3_A_4) + np.exp(A_4 - max_A_3_A_4) * (x > 0)) + max_A_3_A_4)

    return -ll.sum()

def BetaGeoFitter(guess={'r': 0.1, 'alpha': 0.1, 'a': 0.0, 'b': 0.1}):  # x0 on the log scale
    
    # Bounds for the optimization
    # bnds = [(1e-6, np.inf) for _ in range(4)]

    # Optimization using minimize
    return minimize(
        value_and_grad(negative_log_likelihood),
        jac=True,
        method=None,
        args=(p1x, t_x, T),
        tol=1e-7, 
        x0=list(guess.values()),
        options={'disp': True}
    )

result = BetaGeoFitter()
r, alpha, a, b = np.exp(result.x)
ll = result.fun

print(
f"""r = {r:0.4f}
α = {alpha:0.4f}
a = {a:0.4f}
b = {b:0.4f}
Log-Likelihood = {-ll:0.4f}"""
)

index = ["r", "α", "a", "b"]
lifetimes_params = np.exp(result.x)
hessian_mat = hessian(negative_log_likelihood)(result.x, p1x, t_x, T)
var_mat = (lifetimes_params ** 2) * np.linalg.inv(hessian_mat) # Variance-Covariance Matrix
se = np.sqrt(np.diag(var_mat))  # Standard Error
plo = lifetimes_params - 1.96 * se
phi = lifetimes_params + 1.96 * se
pl.DataFrame(
    {
        "Parameter": index,
        "Coef": lifetimes_params,
        "SE (Coef)": se,
        "5.5%": plo,
        "94.5%": phi,
    }
)
Optimization terminated successfully.
         Current function value: 9582.429207
         Iterations: 21
         Function evaluations: 26
         Gradient evaluations: 26
r = 0.2426
α = 4.4136
a = 0.7929
b = 2.4259
Log-Likelihood = -9582.4292
shape: (4, 5)
Parameter Coef SE (Coef) 2.5% 97.5%
str f64 f64 f64 f64
"r" 0.242595 0.012557 0.217982 0.267207
"α" 4.413602 0.378224 3.672283 5.154921
"a" 0.792922 0.185734 0.428884 1.15696
"b" 2.425907 0.705414 1.043295 3.808519

Stan Model

Standard Parameters

import numpy as np

stan_code = '''
data {
    int<lower=0> N;               // Number of customers
    array[N] int<lower=0> X;      // Number of transactions per customer
    vector<lower=0>[N] T;         // Total observation time per customer
    vector<lower=0>[N] t_x;        // Time of last transaction (0 if X=0)
}

parameters {
    real<lower=0> r;                   // gamma shape (r)
    real<lower=0> alpha;               // gamma scale (alpha)
    real<lower=0, upper=5> a;          // beta shape 1 (a)
    real<lower=0, upper=5> b;          // beta shape 2 (b)
}

model {
    // Weakly informative priors
    r ~ weibull(2, 1);
    alpha ~ weibull(2, 10);
    a ~ uniform(0, 5);
    b ~ uniform(0, 5);

    for (n in 1:N) {
        int x = X[n];
        real tx = t_x[n];
        real t = T[n];
    
        if (x == 0) {
              // Likelihood for X=0: (alpha/(alpha + t))^r
              target += r * (log(alpha) - log(alpha + t));
        } else {
              // Term 1: B(a, b + x)/B(a, b) * Γ(r + x)/Γ(r) * (alpha/(alpha + t))^(r + x)
              real beta_term1 = lbeta(a, b + x) - lbeta(a, b);
              real gamma_term = lgamma(r + x) - lgamma(r);
              real term1 = gamma_term + beta_term1 + r * log(alpha) - (r + x) * log(alpha + t);
            
              // Term 2: B(a + 1, b + x - 1)/B(a, b) * Γ(r + x)/Γ(r) * (alpha/(alpha + tx))^(r + x)
              real beta_term2 = lbeta(a + 1, b + x - 1) - lbeta(a, b);
              real term2 = gamma_term + beta_term2 + r * log(alpha) - (r + x) * log(alpha + tx);
            
              // Log-sum-exp for numerical stability
              target += log_sum_exp(term1, term2);
        }
    }
}
'''
Code
data = {
    'N': len(p1x),
    'X': p1x.flatten().astype(int).tolist(),
    'T': T.flatten().tolist(),
    't_x': t_x.flatten().tolist()
}

stan_model = StanQuap(
    stan_file='stan_models/bg-nbd', 
    stan_code=stan_code, data=data, 
    algorithm='LBFGS', jacobian=False, 
    tol_rel_grad=1e-7, iter=5000
)

index = ["r", "α", "a", "b"]
params = np.array(list(stan_model.opt_model.stan_variables().values()))
var_mat = stan_model.vcov_matrix() # Variance-Covariance Matrix
se = np.sqrt(np.diag(var_mat))  # Standard Error
plo = params - 1.96 * se
phi = params + 1.96 * se
pl.DataFrame(
    {
        "Parameter": index,
        "Coef": params,
        "SE (Coef)": se,
        "5.5%": plo,
        "94.5%": phi,
    }
)
shape: (4, 5)
Parameter Coef SE (Coef) 2.5% 97.5%
str f64 f64 f64 f64
"r" 0.243677 0.012593 0.218994 0.26836
"α" 4.44674 0.379501 3.702918 5.190562
"a" 0.792209 0.185781 0.428078 1.15634
"b" 2.4276 0.706745 1.04238 3.81282
Code
x = [stan_model.opt_model.optimized_params_pd['r'][0],
     stan_model.opt_model.optimized_params_pd['alpha'][0],
     stan_model.opt_model.optimized_params_pd['a'][0],
     stan_model.opt_model.optimized_params_pd['b'][0]]

print("Log-Likelihood:", bgnbd_ll(x, p1x, t_x, T))
# print(negative_log_likelihood(np.log(np.array(x)), p1x, t_x, T))
Log-Likelihood: 9582.43342617708

Modified Parameters

stan_code = '''
data {
    int<lower=0> N;               // Number of customers
    array[N] int<lower=0> X;      // Number of transactions per customer
    vector<lower=0>[N] T;         // Total observation time per customer
    vector<lower=0>[N] t_x;        // Time of last transaction (0 if X=0)
}

parameters {
    real<lower=0> r;                         // Shape parameter for the Gamma prior on purchase rate
    real<lower=0> alpha;                     // Scale parameter for purchase rate
    real<lower=0, upper=1> phi_dropout;      // Mixture weight for dropout process (Uniform prior)
    real<lower=1> kappa_dropout;             // Scale parameter for dropout (Pareto prior)
}

transformed parameters {
    real a = phi_dropout * kappa_dropout;       // Dropout shape parameter (controls early dropout likelihood)
    real b = (1 - phi_dropout) * kappa_dropout; // Dropout scale parameter (controls later dropout likelihood)
}

model {
    // Priors:
    r ~ weibull(2, 1);                // Prior on r (purchase rate shape parameter)
    alpha ~ weibull(2, 10);           // Prior on alpha (purchase rate scale parameter)
    phi_dropout ~ uniform(0,1);       // Mixture component for dropout process
    kappa_dropout ~ pareto(1,1);      // Scale of dropout process

    for (n in 1:N) {
        int x = X[n];                 // Number of transactions for customer n
        real tx = t_x[n];              // Time of last transaction
        real t = T[n];                // Total observation time

        if (x == 0) {
            // Likelihood for customers with zero transactions:
            // Probability of no purchases during (0, T): (alpha/(alpha + t))^r
            // Likelihood for X=0: (alpha/(alpha + t))^r
            target += r * (log(alpha) - log(alpha + t));
        } else {
            // Term 1: Probability of surviving until T and making x purchases
            // Term 1: B(a, b + x)/B(a, b) * Γ(r + x)/Γ(r) * (alpha/(alpha + t))^(r + x)
            real beta_term1 = lbeta(a, b + x) - lbeta(a, b);  // Beta function term
            real gamma_term = lgamma(r + x) - lgamma(r);       // Gamma function term
            real term1 = gamma_term + beta_term1 + r * log(alpha) - (r + x) * log(alpha + t);
            
            // Term 2: Probability of surviving until t_x, then dropping out
            // Term 2: B(a + 1, b + x - 1)/B(a, b) * Γ(r + x)/Γ(r) * (alpha/(alpha + tx))^(r + x)
            real beta_term2 = lbeta(a + 1, b + x - 1) - lbeta(a, b);
            real term2 = gamma_term + beta_term2 + r * log(alpha) - (r + x) * log(alpha + tx);
            
            // Log-sum-exp for numerical stability
            target += log_sum_exp(term1, term2);
        }
    }
}
'''
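Rather than sampling a and b directly, the model samples the Beta distribution's mean (phi_dropout) and concentration (kappa_dropout) and recovers a and b in transformed parameters. A minimal sketch of this mapping (with illustrative values, not the fitted ones):

```python
import numpy as np

def beta_from_phi_kappa(phi, kappa):
    # Mean/concentration parameterization of Beta(a, b):
    # phi = a / (a + b) is the mean, kappa = a + b the concentration
    a = phi * kappa
    b = (1 - phi) * kappa
    return a, b

a, b = beta_from_phi_kappa(0.25, 2.8)  # illustrative values
mean_dropout = a / (a + b)             # recovers phi
```

This parameterization tends to be easier to sample than (a, b) directly, since the mean and concentration are less correlated in the posterior.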
Code
data = {
    'N': len(p1x),
    'X': p1x.flatten().astype(int).tolist(),
    'T': T.flatten().tolist(),
    't_x': t_x.flatten().tolist()
}

stan_model = StanQuap(
    stan_file='stan_models/bg-nbd-1', 
    stan_code=stan_code, data=data, 
    algorithm='LBFGS', jacobian=False, 
    tol_rel_grad=1e-7, iter=5000, 
    generated_var=['a', 'b']
)

index = ["r", "α", 'phi', 'kappa', "a", "b"]
MAP_params = np.array(list(stan_model.opt_model.stan_variables().values()))
var_mat = stan_model.vcov_matrix() # Variance-Covariance Matrix
se = np.sqrt(np.diag(var_mat))  # Standard Error
# Approximate SEs for a and b from the SEs of phi and kappa (rough heuristic)
se = np.concatenate((se, np.array([se[-2] * se[-1], (1 - se[-2]) * se[-1]])))
ci_lower = MAP_params - 1.96 * se
ci_upper = MAP_params + 1.96 * se
pl.DataFrame(
    {
        "Parameter": index,
        "Coef": MAP_params,
        "SE (Coef)": se,
        "2.5%": ci_lower,
        "97.5%": ci_upper,
    }
)
shape: (6, 5)
Parameter Coef SE (Coef) 2.5% 97.5%
str f64 f64 f64 f64
"r" 0.243721 0.0126 0.219025 0.268417
"α" 4.44371 0.379233 3.700413 5.187007
"phi" 0.25224 0.019638 0.213749 0.290731
"kappa" 2.79767 0.717935 1.390518 4.204822
"a" 0.705685 0.014099 0.678051 0.733319
"b" 2.09198 0.703836 0.712462 3.471498
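The standard errors for a and b above are formed by multiplying the SEs of phi and kappa, which is only a rough heuristic. A first-order delta-method propagation through a = φκ and b = (1 − φ)κ would look like the sketch below (the point and covariance values are illustrative, not the fitted ones):

```python
import numpy as np

def delta_method_se_ab(phi, kappa, cov):
    # Jacobian of (a, b) = (phi*kappa, (1-phi)*kappa) w.r.t. (phi, kappa)
    J = np.array([[kappa, phi],
                  [-kappa, 1.0 - phi]])
    # Propagate the (phi, kappa) covariance to (a, b)
    return np.sqrt(np.diag(J @ cov @ J.T))

# illustrative (phi, kappa) point and covariance matrix
se_a, se_b = delta_method_se_ab(0.25, 2.8, np.array([[4e-4, 0.0],
                                                     [0.0, 0.515]]))
```

Because a and b are products of correlated quantities, the delta method (or better, resampling from the Laplace approximation) gives a more defensible uncertainty estimate than multiplying SEs.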
Code
x = [stan_model.opt_model.optimized_params_pd['r'][0],
     stan_model.opt_model.optimized_params_pd['alpha'][0],
     stan_model.opt_model.optimized_params_pd['a'][0],
     stan_model.opt_model.optimized_params_pd['b'][0]]

print("Log-Likelihood:", bgnbd_ll(x, p1x, t_x, T))
# print(negative_log_likelihood(np.log(np.array(x)), p1x, t_x, T))
Log-Likelihood: 9582.570545837596

MCMC Model Fitting

mcmc = stan_model.stan_model.sample(data=data)
inf_data = az.from_cmdstanpy(mcmc)
print(mcmc.diagnose())
Checking sampler transitions treedepth.
Treedepth satisfactory for all transitions.

Checking sampler transitions for divergences.
No divergent transitions found.

Checking E-BFMI - sampler transitions HMC potential energy.
E-BFMI satisfactory.

Rank-normalized split effective sample size satisfactory for all parameters.

Rank-normalized split R-hat values satisfactory for all parameters.

Processing complete, no problems detected.

Model Summary

Code
mcmc.summary()
Mean MCSE StdDev MAD 5% 50% 95% ESS_bulk ESS_tail R_hat
lp__ -9587.830000 0.038271 1.478030 1.260210 -9590.660000 -9587.510000 -9586.130000 1638.18 2295.30 1.00122
r 0.245939 0.000266 0.012932 0.012899 0.225431 0.245476 0.268304 2389.00 2360.28 1.00304
alpha 4.518880 0.008292 0.392577 0.389701 3.901170 4.509290 5.181600 2257.79 2305.97 1.00166
phi_dropout 0.248495 0.000408 0.019934 0.020022 0.216679 0.248123 0.282067 2400.60 1836.54 1.00208
kappa_dropout 3.166630 0.018290 0.917776 0.820656 1.991020 3.002720 4.938250 2527.00 2418.74 1.00087
a 0.776184 0.003475 0.191351 0.172787 0.522343 0.742927 1.141710 2967.40 2552.57 1.00072
b 2.390440 0.015009 0.735329 0.645732 1.441440 2.253520 3.832530 2439.03 2387.61 1.00108

Model Trace Plot

Code
axes = az.plot_trace(
    data=inf_data,
    compact=True,
    kind="rank_bars",
    backend_kwargs={"figsize": (12, 9), "layout": "constrained"},
)
plt.gcf().suptitle("BG/NBD Model Trace", fontsize=18, fontweight="bold");
# Note: the figure already uses constrained layout (set via backend_kwargs),
# so no call to plt.tight_layout() is needed here.

Model Posterior Plot

Code
fig, axes = plt.subplots(2, 2, figsize=(12, 9), sharex=False, sharey=False)

axes = axes.flatten()

for i, var_name in enumerate(["r", "alpha", "a", "b"]):
    ax = axes[i]
    az.plot_posterior(
        inf_data.posterior[var_name].values.flatten(),
        color="C0",
        point_estimate="mean",
        ax=ax,
        label="MCMC",
    )
    ax.axvline(x=stan_model.opt_model.stan_variable(var_name), color="C1", linestyle="--", label="Stan MAP")
    ax.axvline(x=lifetimes_params[i], color="C2", linestyle="--", label="Lifetimes")
    ax.axvline(x=scipy_params[i], color="C3", linestyle="--", label="SciPy")
    ax.legend(loc="upper right")
    ax.set_title(var_name)

plt.gcf().suptitle("BG/NBD Model Parameters", fontsize=18, fontweight="bold")
plt.tight_layout();

The r and alpha purchase rate parameters are quite similar across all of the fitting approaches, but the a and b dropout parameters are better approximated via the phi_dropout and kappa_dropout reparameterization when fitted with MCMC.

Prior and Posterior Predictive Checks

PPCs allow us to check the efficacy of our priors and the performance of the fitted posteriors.

Prior Predictive Check

Let’s see how the model performs in a prior predictive check, where we sample from the default priors before fitting the model:

stan_prior_code = '''
data {
    int<lower=0> N;
    vector<lower=0>[N] T;
}

generated quantities {
    real r = weibull_rng(2, 1);
    real alpha = weibull_rng(2, 10);
    real phi_dropout = uniform_rng(0, 1);
    real kappa_dropout = pareto_rng(1, 1);

    real a = phi_dropout * kappa_dropout;
    real b = (1 - phi_dropout) * kappa_dropout;

    array[N] int X_rep;
    vector[N] t_x_rep;

    for (n in 1:N) {
        real lambda_n = gamma_rng(r, alpha);
        real p_n = beta_rng(a, b);
        real current_time = 0;
        int x = 0;
        real tx = 0;
        int active = 1;

        while (active && current_time < T[n]) {
            real wait = exponential_rng(lambda_n);
            if (current_time + wait > T[n]) {
                break;
            } else {
                current_time += wait;
                x += 1;
                tx = current_time;
                if (bernoulli_rng(p_n)) {
                    active = 0;
                }
            }
        }
        X_rep[n] = x;
        t_x_rep[n] = tx;
    }
}
'''
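The generated quantities block above simulates the BG/NBD data-generating story directly. The same simulator can be sketched in NumPy (the helper name `simulate_bgnbd_customer` is ours, for illustration):

```python
import numpy as np

def simulate_bgnbd_customer(lam, p, T, rng):
    """Simulate one customer's repeat purchases under the BG/NBD story:
    exponential inter-purchase waits at rate lam, and after each purchase
    the customer drops out with probability p."""
    t, x, tx = 0.0, 0, 0.0
    while True:
        wait = rng.exponential(1.0 / lam)
        if t + wait > T:
            break           # observation window ends before the next purchase
        t += wait
        x += 1
        tx = t              # recency: time of the most recent purchase
        if rng.random() < p:
            break           # customer becomes inactive after this purchase
    return x, tx

rng = np.random.default_rng(1)
x, tx = simulate_bgnbd_customer(lam=0.5, p=0.3, T=38.86, rng=rng)
```

Drawing lam from Gamma(r, alpha) and p from Beta(a, b) per customer reproduces the hierarchical structure of the prior predictive above.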
Code
data = {
    'N': len(p1x),
    'T': T.flatten().tolist(),
}

stan_model = Stan(stan_file='stan_models/bg-nbd-prior', stan_code=stan_prior_code)
prior_fit = stan_model.sample(data=data)
Code
prior_samples = prior_fit.stan_variable('X_rep').astype(int)

# Compute prior frequency distribution, pooling counts >= max_purch into the last bin
max_purch = 9
prior_freq_counts = np.zeros((prior_samples.shape[0], max_purch + 1))
for i in range(prior_samples.shape[0]):
    counts = np.bincount(np.minimum(prior_samples[i], max_purch), minlength=max_purch + 1)
    prior_freq_counts[i] = counts / data['N']

mean_frequency = prior_freq_counts.mean(axis=0)
hdi = az.hdi(prior_freq_counts, hdi_prob=0.89)

observed_counts = np.bincount(np.minimum(p1x.flatten().astype(int), max_purch), minlength=max_purch + 1)
observed_frequency = observed_counts / data['N']

plt.clf()
plt.bar(np.arange(max_purch+1)+0.2, mean_frequency, width=0.4, label='Estimated', color='white', edgecolor='black', linewidth=0.5, yerr=[mean_frequency - hdi[:,0], hdi[:,1] - mean_frequency])
plt.bar(np.arange(max_purch+1)-0.2, observed_frequency, width=0.4, label='Observed', color='black')
plt.xlabel(f"Number of Repeat Purchases (0-{max_purch}+)")
plt.ylabel("% of Customer Population")
plt.title("Prior Predictive Check: Customer Purchase Frequency")
plt.xticks(ticks=np.arange(max_purch+1), labels=[f'{i}' if i < max_purch else f'{max_purch}+' for i in range(max_purch+1)])
plt.legend();

Posterior Predictive Check

stan_post_code = '''
data {
    int<lower=0> N;               
    array[N] int<lower=0> X;      
    vector<lower=0>[N] T;         
    vector<lower=0>[N] t_x;        
}

parameters {
    real<lower=0> r;                         
    real<lower=0> alpha;                     
    real<lower=0, upper=1> phi_dropout;      
    real<lower=1> kappa_dropout;             
}

transformed parameters {
    real a = phi_dropout * kappa_dropout;       
    real b = (1 - phi_dropout) * kappa_dropout; 
}

model {
    // Priors:
    r ~ weibull(2, 1);                
    alpha ~ weibull(2, 10);           
    phi_dropout ~ uniform(0,1);       
    kappa_dropout ~ pareto(1,1);      

    for (n in 1:N) {
        int x = X[n];                 
        real tx = t_x[n];              
        real t = T[n];                

        if (x == 0) {
            target += r * (log(alpha) - log(alpha + t));
        } else {
            real beta_term1 = lbeta(a, b + x) - lbeta(a, b);  
            real gamma_term = lgamma(r + x) - lgamma(r);       
            real term1 = gamma_term + beta_term1 + r * log(alpha) - (r + x) * log(alpha + t);
            
            real beta_term2 = lbeta(a + 1, b + x - 1) - lbeta(a, b);
            real term2 = gamma_term + beta_term2 + r * log(alpha) - (r + x) * log(alpha + tx);
            
            target += log_sum_exp(term1, term2);
        }
    }
}

generated quantities {
    array[N] int X_rep;
    vector[N] t_x_rep;

    for (n in 1:N) {
        real lambda_n = gamma_rng(r, alpha);
        real p_n = beta_rng(a, b);
        real current_time = 0;
        int x = 0;
        real tx = 0;
        int active = 1;

        while (active && current_time < T[n]) {
            real wait = exponential_rng(lambda_n);
            if (current_time + wait > T[n]) {
                break;
            } else {
                current_time += wait;
                x += 1;
                tx = current_time;
                if (bernoulli_rng(p_n)) {
                    active = 0;
                }
            }
        }
        X_rep[n] = x;
        t_x_rep[n] = tx;
    }
}
'''
Code
data = {
    'N': len(p1x),
    'X': p1x.flatten().astype(int).tolist(),
    'T': T.flatten().tolist(),
    't_x': t_x.flatten().tolist()
}

stan_model = Stan(stan_file='stan_models/bg-nbd-post', stan_code=stan_post_code)
post_preds = stan_model.sample(data=data)
Code
posterior = az.from_cmdstanpy(post_preds, posterior_predictive=['X_rep', 't_x_rep'])
posterior_samples = post_preds.stan_variable('X_rep').astype(int) 

# Compute frequency distribution, pooling counts >= max_purch into the last bin
max_purch = 9
frequency_counts = np.zeros((posterior_samples.shape[0], max_purch + 1))
for i in range(posterior_samples.shape[0]):
    counts = np.bincount(np.minimum(posterior_samples[i], max_purch), minlength=max_purch + 1)
    frequency_counts[i] = counts / data['N']

# Calculate mean and HDI
mean_frequency = frequency_counts.mean(axis=0)
hdi = az.hdi(frequency_counts, hdi_prob=0.89)

observed_counts = np.bincount(np.minimum(p1x.flatten().astype(int), max_purch), minlength=max_purch + 1)
observed_frequency = observed_counts / data['N']

plt.clf()
plt.bar(np.arange(max_purch+1)+0.2, mean_frequency, width=0.4, label='Estimated', color='white', edgecolor='black', linewidth=0.5, yerr=[mean_frequency - hdi[:,0], hdi[:,1] - mean_frequency])
plt.bar(np.arange(max_purch+1)-0.2, observed_frequency, width=0.4, label='Observed', color='black')
plt.xlabel(f"Number of Repeat Purchases (0-{max_purch}+)")
plt.ylabel("% of Customer Population")
plt.title("Posterior Predictive Check: Customer Purchase Frequency")
plt.xticks(ticks=np.arange(max_purch+1), labels=[f'{i}' if i < max_purch else f'{max_purch}+' for i in range(max_purch+1)])
plt.legend();

Applications

Now that we have fitted the model, we can use it to make predictions. For example, we can predict the probability of a customer being alive as a function of time.

Expected Number of Purchases

Let us take a sample of users:

Code
example_customer_ids = [1, 6, 10, 18, 45, 1412]

p1x_sample = p1x[example_customer_ids]
t_x_sample = t_x[example_customer_ids]
T_sample = T[example_customer_ids]

pl.DataFrame({
    'cust_id': example_customer_ids,
    'p1x': p1x_sample.astype(int),
    't_x': t_x_sample.round(2),
    'T': T_sample.round(2)
})
shape: (6, 4)
cust_id p1x t_x T
i64 i64 f64 f64
1 1 1.71 38.86
6 1 5.0 38.86
10 5 24.43 38.86
18 3 28.29 38.71
45 12 34.43 38.57
1412 14 30.29 31.57

Observe that the last two customers are frequent buyers as compared to the others.

We will use the following forward-looking expression to obtain the expected number of transactions in a future period of length \(t\) for an individual with past observed behavior \(\left(X=x, t_{x}, T\right)\).

\[ \begin{align*} & E\left(Y(t) \mid X=x, t_{x}, T, r, \alpha, a, b\right)= \\ & \qquad \frac{\frac{a+b+x-1}{a-1}\left[1-\left(\frac{\alpha+T}{\alpha+T+t}\right)^{r+x} {}_{2}F_{1}\left(r+x, b+x ; a+b+x-1 ; \frac{t}{\alpha+T+t}\right)\right]}{1+\delta_{x>0} \frac{a}{b+x-1}\left(\frac{\alpha+T}{\alpha+t_{x}}\right)^{r+x}} \end{align*} \]

Code
from scipy.special import hyp2f1

def expected_purchases(x, t_x, T, t, r, alpha, a, b):
    '''
    Compute the expected number of purchases over the next *t* time
    periods for a customer with frequency *x*, recency *t_x*, and age *T*.
    '''
    h2f1_cust = hyp2f1(r+x, b+x, a+b+x-1, t/(alpha + T + t))
    return (
        (a + b + x - 1) / (a - 1) * (1 - ((alpha + T) / (alpha + T + t))**(r + x) * h2f1_cust) /
        (1 + (x > 0) * a / (b + x - 1) * ((alpha + T) / (alpha + t_x))**(r + x))
    )
Code
future_t = np.arange(91) # Future Periods to analyze
params = [post_preds.stan_variable(param).reshape(-1,1) for param in ['r', 'alpha', 'a', 'b']]
exp_purch = expected_purchases(p1x_sample, t_x_sample, T_sample, future_t.reshape(-1,1,1), *params) # shape (91, 4000, 6)

fig, axes = plt.subplots(nrows=6, ncols=1, figsize=(10, 15), sharex=True, sharey=True,layout="constrained")

for i, cust_id in enumerate(example_customer_ids):
    ax = axes[i]
    customer_data = exp_purch[:, :, i]  # Shape (91, 4000)
    mean_purchases = customer_data.mean(axis=1) # Shape (91,)
    pi = np.quantile(customer_data, q=[0.055, 0.945], axis=1)  # Shape (2, 91)
    
    ax.plot(future_t, mean_purchases, color='k', label='Mean')
    ax.fill_between(future_t, pi[0,:], pi[1,:], color='k', alpha=0.2, label='89% PI')
    ax.set_title(f'Customer ID: {cust_id}')
    ax.set_ylabel('Expected Purchases')
    if i == 0: ax.legend()

axes[-1].set_xlabel('Future Time Period')
plt.show()

Note that the frequent buyers are expected to make more purchases in the future.

Probability of a Customer Being Alive

We will implement the following expression as a Python function:

\[ P(\text{alive} \mid x, t_x, T, r, \alpha, a, b) = \begin{cases} 1 & x = 0 \\ \frac{1}{1 + \frac{a}{b+x-1} \left( \frac{\alpha + T}{\alpha + t_x} \right)^{r+x}} & x > 0 \end{cases} \]

Code
from scipy.special import expit

def expected_probability_alive(x, t_x, T, r, alpha, a, b):
    '''
    Compute the probability that a customer with frequency *x*,
    recency *t_x*, and age *T* is currently active.
    '''
    log_div = (r + x) * np.log((alpha + T) / (alpha + t_x)) + np.log(
        a / (b + np.maximum(x, 1) - 1)
    )    

    return np.where(x == 0, 1.0, expit(-log_div))

For \(x = 0\):
The code returns 1.0 directly (np.where(x == 0, 1.0, ...)), so \(P(\text{alive}) = 1\) when no purchases are observed.

For \(x > 0\):
log_div computes:
\[ \log\left(\frac{a}{b + x - 1}\right) + (r + x) \log\left(\frac{\alpha + T}{\alpha + t_x}\right) \]

expit(-log_div) computes:
\[ \frac{1}{1 + e^{\log\_div}} = \frac{1}{1 + \frac{a}{b+x-1} \left( \frac{\alpha + T}{\alpha + t_x} \right)^{r+x}} \]
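A quick numeric sanity check (with illustrative, not fitted, parameter values) that the expit form matches the closed-form expression:

```python
import numpy as np
from scipy.special import expit

# illustrative parameter values and customer history
r, alpha, a, b = 0.24, 4.4, 0.79, 2.43
x, t_x, T = 5, 24.43, 38.86

# closed-form expression, evaluated directly
direct = 1.0 / (1.0 + a / (b + x - 1) * ((alpha + T) / (alpha + t_x)) ** (r + x))

# log-space form used in the function above
log_div = (r + x) * np.log((alpha + T) / (alpha + t_x)) + np.log(a / (b + x - 1))
via_expit = expit(-log_div)
```

Working in log space avoids overflow when (alpha + T)/(alpha + t_x) is raised to a large power r + x.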

We now look into the probability of a customer being alive for the next 90 periods:

Code
future_t = np.arange(91)
future_alive = expected_probability_alive(p1x_sample, t_x_sample, (T_sample + future_t.reshape(-1,1,1)), *params)

fig, axes = plt.subplots(nrows=6, ncols=1, figsize=(10, 15), sharex=True, sharey=True,layout="constrained")

for i, cust_id in enumerate(example_customer_ids):
    ax = axes[i]
    customer_data = future_alive[:, :, i]  # Shape (91, 4000)
    mean_palive = customer_data.mean(axis=1) # Shape (91,)
    pi = np.quantile(customer_data, q=[0.055, 0.945], axis=1)  # Shape (2, 91)
    
    ax.plot(future_t, mean_palive, color='k', label='Mean')
    ax.fill_between(future_t, pi[0,:], pi[1,:], color='k', alpha=0.2, label='89% PI')
    ax.set_title(f'Customer ID: {cust_id}')
    ax.set_ylabel('Probability Alive')
    if i == 0: ax.legend()

axes[-1].set_xlabel('Future Time Period')
plt.show()

Here are some general remarks:

  • These plots assume no future purchases.

  • The rate at which the probability decays is not the same for all customers, as it depends on each customer's purchase history.

  • The probability of being alive is always decreasing as we are assuming there is no change in the other parameters.

  • These probabilities are always non-negative, as expected.

For the frequent buyers, the probability of being alive drops very fast as we are assuming no future purchases. It is very important to keep this in mind when interpreting the results.

Probability of a Customer Making Zero Purchases in a Time Range

Code
from scipy.special import gammaln, betaln, logsumexp

def expected_probability_y_purchase(x, t_x, T, y, t, r, alpha, a, b):
    """
    Compute P(Y(t)=y | X=x,t_x,T,r,alpha,a,b).
    """
    alpha_T = alpha + T
    alpha_t_x = alpha + t_x
    alpha_T_t = alpha + T + t
    
    # Compute log_A (Equation 35)
    log_A = (
        betaln(a + 1, b + x - 1)
        - betaln(a, b)
        + gammaln(r + x)
        - gammaln(r)
        + r * np.log(alpha)
        - (r + x) * np.log(alpha_t_x)
    )
    
    # Compute log_B (Equation 36)
    log_B = (
        betaln(a, b + x + y)
        - betaln(a, b)
        + gammaln(r + x + y)
        - gammaln(r)
        - gammaln(y + 1)  # = log(factorial(y))
        + r * np.log(alpha)
        + y * np.log(t)
        - (r + x + y) * np.log(alpha_T_t)
    )
    
    # Compute log_C (Equation 37) if y > 0
    if y > 0:
        j = np.arange(y)
        log_terms = (
            gammaln(r + x + j)
            - gammaln(r)
            - gammaln(j + 1)
            + r * np.log(alpha)
            + j * np.log(t)
            - (r + x + j) * np.log(alpha_T_t)
        )
        log_sum = logsumexp(log_terms, axis=-1)  # Sum over j
        
        log_C_part1 = (
            betaln(a + 1, b + x + y - 1)
            - betaln(a, b)
            + gammaln(r + x)
            - gammaln(r)
            + r * np.log(alpha)
            - (r + x) * np.log(alpha_T)
        )
        
        log_C_part2 = (
            betaln(a + 1, b + x + y - 1)
            - betaln(a, b)
            + log_sum
        )
        
        # Compute log(exp(log_C_part1) - exp(log_C_part2)) safely
        log_diff = log_C_part2 - log_C_part1
        log_C = np.where(
            log_diff < 0,
            log_C_part1 + np.log1p(-np.exp(log_diff)),
            -np.inf
        )
    else:
        log_C = -np.inf
    
    # Compute numerator terms
    mask_A = (x > 0) & (y == 0)
    mask_C = (y > 0)
    
    log_term_A = np.where(mask_A, log_A, -np.inf)
    log_term_B = log_B
    log_term_C = np.where(mask_C, log_C, -np.inf)
    
    # Combine numerator terms using logsumexp
    log_numerator = logsumexp(
        [log_term_A, log_term_B, log_term_C],
        axis=0
    )
    
    # Compute denominator (BG/NBD likelihood L from Equation 11)
    log_L_term1 = (
        betaln(a, b + x)
        - betaln(a, b)
        + gammaln(r + x)
        - gammaln(r)
        + r * np.log(alpha)
        - (r + x) * np.log(alpha_T)
    )
    
    log_L_term2 = (
        betaln(a + 1, b + x - 1)
        - betaln(a, b)
        + gammaln(r + x)
        - gammaln(r)
        + r * np.log(alpha)
        - (r + x) * np.log(alpha_t_x)
    )
    
    log_L = np.where(
        x > 0,
        logsumexp([log_L_term1, log_L_term2], axis=0),
        log_L_term1
    )
    
    return np.exp(log_numerator - log_L)

We now look into the probability of a customer making \(0\) purchases between now and each of the next \(t\) periods, for \(t\) from 0 to 30.

Code
# future_t = np.arange(31) # Future Periods to analyze
# expected_probability_zero_purchases = expected_probability_y_purchase(
#     p1x_sample, t_x_sample, T_sample, np.array([0]).reshape(1,-1,1), future_t.reshape(-1,1,1), *params
# )
# expected_probability_zero_purchases

Appendix

In this appendix, we derive the expressions for \(E[X(t)]\) and \(E(Y(t) | X = x, t_x, T)\). Central to these derivations is Euler’s integral for the Gaussian hypergeometric function:

\[ {}_{2}F_{1}(a, b; c; z) = \frac{1}{B(b, c - b)} \int_{0}^{1} t^{b-1} (1 - t)^{c - b - 1} (1 - zt)^{-a} dt, \quad c > b. \]
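This identity can be verified numerically with SciPy for illustrative parameter values satisfying \(c > b > 0\):

```python
from scipy.integrate import quad
from scipy.special import beta, hyp2f1

a_, b_, c_, z = 1.3, 0.8, 2.1, 0.4   # illustrative; requires c > b > 0

# Right-hand side: Euler's integral, evaluated by quadrature
integral, _ = quad(lambda t: t**(b_ - 1) * (1 - t)**(c_ - b_ - 1) * (1 - z*t)**(-a_), 0, 1)
rhs = integral / beta(b_, c_ - b_)

# Left-hand side: SciPy's Gaussian hypergeometric function
lhs = hyp2f1(a_, b_, c_, z)
```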

Derivation of \(E[X(t)]\)

To arrive at an expression for \(E[X(t)]\) for a randomly-chosen customer, we need to take the expectation of (5) over the distribution of \(\lambda\) and \(p\). First, we take the expectation with respect to \(\lambda\), giving us:

\[ E(X(t) | r, \alpha, p) = \frac{1}{p} - \frac{\alpha^{r}}{p (\alpha + p t)^{r}}. \]

The next step is to take the expectation of this over the distribution of \(p\). We first evaluate:

\[ \int_{0}^{1} \frac{1}{p} \frac{p^{a - 1} (1 - p)^{b - 1}}{B(a, b)} dp = \frac{a + b - 1}{a - 1}. \]
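A numeric check of this step (with illustrative values, noting that \(a > 1\) is required for the integral to converge):

```python
from scipy.integrate import quad
from scipy.special import beta

a_, b_ = 2.5, 1.7   # illustrative; a > 1 is required for convergence

# E[1/p] under Beta(a, b), evaluated by quadrature
lhs, _ = quad(lambda p: p**(a_ - 2) * (1 - p)**(b_ - 1) / beta(a_, b_), 0, 1)

# Closed-form result: (a + b - 1) / (a - 1)
rhs = (a_ + b_ - 1) / (a_ - 1)
```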

Next, we evaluate:

\[ \int_{0}^{1} \frac{\alpha^{r}}{p (\alpha + p t)^{r}} \frac{p^{a - 1} (1 - p)^{b - 1}}{B(a, b)} dp = \alpha^{r} \frac{1}{B(a, b)} \int_{0}^{1} p^{a - 2} (1 - p)^{b - 1} (\alpha + p t)^{-r} dp. \]

Letting \(q = 1 - p\) (which implies \(dp = -dq\)):

\[ = \left(\frac{\alpha}{\alpha + t}\right)^{r} \frac{1}{B(a, b)} \int_{0}^{1} q^{b - 1} (1 - q)^{a - 2} \left(1 - \frac{t}{\alpha + t} q\right)^{-r} dq. \]

Recalling Euler’s integral for the Gaussian hypergeometric function:

\[ = \left(\frac{\alpha}{\alpha + t}\right)^{r} \frac{B(a - 1, b)}{B(a, b)} {}_{2}F_{1}\left(r, b; a + b - 1; \frac{t}{\alpha + t}\right). \]

It follows that:

\[ E(X(t) | r, \alpha, a, b) = \frac{a + b - 1}{a - 1} \left[1 - \left(\frac{\alpha}{\alpha + t}\right)^{r} {}_{2}F_{1}\left(r, b; a + b - 1; \frac{t}{\alpha + t}\right)\right]. \]
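Evaluated with SciPy's `hyp2f1` (with ballpark parameter values for illustration, not the fitted ones), \(E[X(t)]\) is positive and increasing in \(t\), as a cumulative purchase count must be:

```python
from scipy.special import hyp2f1

def expected_total_purchases(t, r, alpha, a, b):
    # E[X(t)] for a randomly chosen customer
    z = t / (alpha + t)
    return ((a + b - 1) / (a - 1)
            * (1 - (alpha / (alpha + t))**r * hyp2f1(r, b, a + b - 1, z)))

# ballpark parameter values for illustration
r, alpha, a, b = 0.24, 4.4, 0.79, 2.43
e10 = expected_total_purchases(10, r, alpha, a, b)
e20 = expected_total_purchases(20, r, alpha, a, b)
```

Note that with \(a < 1\) both \((a+b-1)/(a-1)\) and the bracketed term are negative, so the product remains positive.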

Derivation of \(E(Y(t) | X = x, t_x, T)\)

Let the random variable \(Y(t)\) denote the number of purchases made in the period \((T, T + t]\). We are interested in computing the conditional expectation \(E(Y(t) | X = x, t_x, T)\), the expected number of purchases in the period \((T, T + t]\) for a customer with purchase history \(X = x, t_x, T\).

If the customer is active at \(T\), it follows from (5) that:

\[ E(Y(t) | \lambda, p) = \frac{1}{p} - \frac{1}{p} e^{-\lambda p t}. \]

What is the probability that a customer is active at \(T\)? Given our assumption that all customers are active at the beginning of the initial observation period, a customer cannot drop out before he has made any transactions; therefore:

\[ P(\text{active at } T | X = 0, T, \lambda, p) = 1. \]

For the case where purchases were made in the period \((0, T]\), the probability that a customer with purchase history \((X = x, t_x, T)\) is still active at \(T\), conditional on \(\lambda\) and \(p\), is simply the probability that he did not drop out at \(t_x\) and made no purchase in \((t_x, T]\), divided by the probability of making no purchases in this same period. Recalling that this second probability is simply the probability that the customer became inactive at \(t_x\), plus the probability he remained active but made no purchases in this interval, we have:

\[ P(\text{active at } T | X = x, t_x, T, \lambda, p) = \frac{(1 - p) e^{-\lambda (T - t_x)}}{p + (1 - p) e^{-\lambda (T - t_x)}}. \]

Multiplying this by \(\frac{(1 - p)^{x - 1} \lambda^{x} e^{-\lambda t_x}}{(1 - p)^{x - 1} \lambda^{x} e^{-\lambda t_x}}\) gives us:

\[ P(\text{active at } T | X = x, t_x, T, \lambda, p) = \frac{(1 - p)^{x} \lambda^{x} e^{-\lambda T}}{L(\lambda, p | X = x, t_x, T)}, \]

where the expression for \(L(\lambda, p | X = x, t_x, T)\) is given in (3). (Note that when \(x = 0\), the expression given in (A2) equals 1.)

Multiplying (A1) and (A2) yields:

\[ \begin{align*} E(Y(t) | X = x, t_x, T, \lambda, p) &= \frac{(1 - p)^{x} \lambda^{x} e^{-\lambda T} \left(\frac{1}{p} - \frac{1}{p} e^{-\lambda p t}\right)}{L(\lambda, p | X = x, t_x, T)} \\ &= \frac{p^{-1} (1 - p)^{x} \lambda^{x} e^{-\lambda T} - p^{-1} (1 - p)^{x} \lambda^{x} e^{-\lambda (T + p t)}}{L(\lambda, p | X = x, t_x, T)}. \end{align*} \]

(Note that this reduces to (A1) when \(x = 0\), which follows from the result that a customer who made zero purchases in the time period \((0, T]\) must be assumed to be active at time \(T\).)

As the transaction rate \(\lambda\) and dropout probability \(p\) are unobserved, we compute \(E(Y(t) | X = x, t_x, T)\) for a randomly chosen customer by taking the expectation of (A3) over the distribution of \(\lambda\) and \(p\), updated to take account of the information \(X = x, t_x, T\):

\[ \begin{align*} E(Y(t) | X = x, t_x, T, r, \alpha, a, b) &= \int_{0}^{1} \int_{0}^{\infty} E(Y(t) | X = x, t_x, T, \lambda, p) \\ &\cdot f(\lambda, p | r, \alpha, a, b, X = x, t_x, T) d\lambda dp. \end{align*} \]

By Bayes theorem, the joint posterior distribution of \(\lambda\) and \(p\) is given by:

\[ f(\lambda, p | r, \alpha, a, b, X = x, t_x, T) = \frac{L(\lambda, p | X = x, t_x, T) f(\lambda | r, \alpha) f(p | a, b)}{L(r, \alpha, a, b | X = x, t_x, T)}. \]

Substituting (A3) and (A5) in (A4), we get:

\[ E(Y(t) | X = x, t_x, T, r, \alpha, a, b) = \frac{A - B}{L(r, \alpha, a, b | X = x, t_x, T)}, \]

where:

\[ \begin{align*} A &= \int_{0}^{1} \int_{0}^{\infty} p^{-1} (1 - p)^{x} \lambda^{x} e^{-\lambda T} f(\lambda | r, \alpha) f(p | a, b) d\lambda dp \\ &= \frac{B(a - 1, b + x)}{B(a, b)} \frac{\Gamma(r + x) \alpha^{r}}{\Gamma(r) (\alpha + T)^{r + x}} \end{align*} \]

and:

\[ \begin{align*} B &= \int_{0}^{1} \int_{0}^{\infty} p^{-1} (1 - p)^{x} \lambda^{x} e^{-\lambda (T + p t)} f(\lambda | r, \alpha) f(p | a, b) d\lambda dp \\ &= \int_{0}^{1} \frac{p^{a - 2} (1 - p)^{b + x - 1}}{B(a, b)} \left\{\int_{0}^{\infty} \frac{\alpha^{r} \lambda^{r + x - 1} e^{-\lambda (\alpha + T + p t)}}{\Gamma(r)} d\lambda\right\} dp \\ &= \frac{\Gamma(r + x) \alpha^{r}}{\Gamma(r) B(a, b)} \int_{0}^{1} p^{a - 2} (1 - p)^{b + x - 1} (\alpha + T + p t)^{-(r + x)} dp. \end{align*} \]

Letting \(q = 1 - p\) (which implies \(dp = -dq\)):

\[ = \frac{\Gamma(r + x) \alpha^{r}}{\Gamma(r) B(a, b) (\alpha + T + t)^{r + x}} \cdot \int_{0}^{1} q^{b + x - 1} (1 - q)^{a - 2} \left(1 - \frac{t}{\alpha + T + t} q\right)^{-(r + x)} dq. \]

Recalling Euler’s integral for the Gaussian hypergeometric function:

\[ = \frac{B(a - 1, b + x)}{B(a, b)} \frac{\Gamma(r + x) \alpha^{r}}{\Gamma(r) (\alpha + T + t)^{r + x}} \cdot {}_{2}F_{1}\left(r + x, b + x; a + b + x - 1; \frac{t}{\alpha + T + t}\right). \]

Substituting (6), (A7), and (A8) in (A6) and simplifying, we get:

\[ E(Y(t) | X = x, t_x, T, r, \alpha, a, b) = \frac{\frac{a + b + x - 1}{a - 1} \left[1 - \left(\frac{\alpha + T}{\alpha + T + t}\right)^{r + x} {}_{2}F_{1}\left(r + x, b + x; a + b + x - 1; \frac{t}{\alpha + T + t}\right)\right]}{1 + \delta_{x > 0} \frac{a}{b + x - 1} \left(\frac{\alpha + T}{\alpha + t_x}\right)^{r + x}}. \]